Abstract:
Purpose - The purpose of this paper is to design and implement a team of middle-size soccer robots that conform to the RoboCup middle-size league. Design/methodology/approach - First, according to the rules of RoboCup, a middle-size soccer robot was designed. The proposed autonomous soccer robot consists of a mechanical platform, a motion control module, an omni-directional vision module, a front vision module, an image processing and recognition module, target object positioning and real-coordinate reconstruction, robot path planning, competition strategies, and obstacle avoidance. The robot is equipped with a laptop computer and interface circuits to make decisions. Findings - The omni-directional vision sensor of the vision system handles image processing and positioning for obstacle avoidance and target tracking. A boundary-following algorithm is applied to find the important features of the field. Sensor data fusion is utilized for the control system parameters, self-localization, and world modeling. Vision-based self-localization and conventional odometry are fused for robust self-localization. The localization algorithm includes filtering, sharing, and integration of the data for the different types of objects recognized in the environment. Originality/value - This paper presents results of research work in the field of autonomous robots (middle-size soccer robots) supported by IAU, Khorasgan Branch (Isfahan).
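The odometry/vision fusion step mentioned in the Findings can be sketched as a weighted blend of the two pose estimates (illustrative Python only, not the authors' implementation; the function name `fuse_pose` and the confidence weight `w_vision` are assumptions):

```python
import math

def fuse_pose(odom, vision, w_vision=0.3):
    """Blend an odometry pose estimate with a vision-based one.

    Poses are (x, y, theta) tuples; w_vision is a hypothetical
    confidence weight given to the vision fix. Headings are blended
    via their sin/cos components so angle wrap-around is handled.
    """
    x = (1 - w_vision) * odom[0] + w_vision * vision[0]
    y = (1 - w_vision) * odom[1] + w_vision * vision[1]
    s = (1 - w_vision) * math.sin(odom[2]) + w_vision * math.sin(vision[2])
    c = (1 - w_vision) * math.cos(odom[2]) + w_vision * math.cos(vision[2])
    return (x, y, math.atan2(s, c))
```

In a full system a Kalman filter would normally replace the fixed weight, with the effective `w_vision` derived from the covariances of the two estimates.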
Abstract:
Measurement of the spreading capability of nodes has been one of the most attractive challenges in the field of social networks. Because of the huge number of nodes in a network, many researchers have sought an accurate measure that can detect the spreading capability and ranking of nodes. Most of the available methods determine the spreading capability of nodes based on their topological locations. In this paper, however, we propose a new measure based on basic notions in information theory to detect the spreading capability of nodes in networks on the basis of their topological information. Simulation and experimental results on real-world and artificial networks show that the proposed measure is more accurate and efficient than similar ones.
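As a toy illustration of an information-theoretic node measure (not the paper's exact formula, which the abstract does not give), one can score a node by the Shannon entropy of its neighbours' normalized degrees:

```python
import math

def entropy_score(adj, v):
    """Illustrative entropy-based centrality: Shannon entropy of the
    normalized degrees of v's neighbours. A sketch of applying
    information theory to local topological information, not the
    paper's actual measure."""
    degs = [len(adj[u]) for u in adj[v]]
    total = sum(degs)
    if total == 0:
        return 0.0
    probs = [d / total for d in degs]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy undirected network as an adjacency dict.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
ranking = sorted(adj, key=lambda v: entropy_score(adj, v), reverse=True)
```

On this toy graph the hub node "a" ranks first, since its neighbourhood is both large and degree-diverse.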
Abstract:
A wireless sensor network contains a very large number of tiny sensors; nodes with known positions are designated as guide nodes, and the remaining nodes with unknown positions are localized with their help. This article uses a combination of fixed and mobile guide nodes for wireless network localization: roughly 20% of the nodes are fixed guide nodes, and three nodes serve as mobile guide nodes. To evaluate its effectiveness, the proposed algorithm was studied and verified through simulation. Low cost, high accuracy, low node power consumption, and complete coverage are the benefits of this approach; its long localization time is the disadvantage.
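Localization from three guide nodes can be illustrated with classic 2-D trilateration, linearizing the three range equations into a 2x2 linear system (a sketch only; the paper's combined fixed/mobile scheme is more involved):

```python
def trilaterate(anchors, dists):
    """Estimate a node's 2-D position from three guide (anchor) nodes
    and measured distances. Subtracting the first circle equation from
    the other two removes the quadratic terms, leaving a 2x2 linear
    system solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero iff the anchors are collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

With noisy ranges and more than three anchors, a least-squares variant of the same linearization is typically used instead.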
Abstract:
The aim of this study is to design an adaptive neuro-fuzzy inference system (ANFIS) model and a fuzzy expert system for the determination of concrete mix designs, and finally to compare their results. These systems are based on two sources: first, ACI standards and principles; second, a concrete mix-design dataset collected by Prof. I-Cheng Yeh. The dataset loaded into ANFIS contains 552 mix designs based on ACI. In addition, we have designed a fuzzy expert system. The input fields of the fuzzy expert system are Slump, Maximum Size of Aggregate (D_(max)), Concrete Compressive Strength (CCS), and Fineness Modulus; the output fields are the quantities of water, cement, fine aggregate (F.A.), and coarse aggregate (C.A.). The ANFIS model has four layers (four ANFIS sub-models): the first layer takes the values of D_(max) and Slump and determines the quantity of Water; the second layer takes the value of Water (computed in the previous layer) and CCS and estimates the quantity of Cement; the third layer takes the values of D_(max) and Slump to compute C.A.; and the fourth layer takes the values of Water, Cement, and C.A. (determined in previous layers) and estimates the quantity of F.A. When these systems were designed and tested, a comparison between the two systems (FIS and ANFIS) showed that the ANFIS model outperforms the fuzzy expert system. In the ANFIS model, the training and average testing errors are 0.86 and 0.8 for Water, 0.21 and 0.22 for Cement, 0.0001 and 0.0004 for C.A., and 0.0049 and 0.0063 for F.A. Compared with the ACI results, the fuzzy expert system shows average errors of 9.5%, 27.6%, 96.5%, and 49% for Water, Cement, C.A., and F.A., respectively.
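The four-layer cascade can be made concrete with placeholder predictors; the data flow below matches the abstract, but the linear coefficients are invented for illustration and are not taken from the paper or from ACI tables:

```python
# Placeholder predictors standing in for the four trained ANFIS models;
# all coefficients below are invented for illustration only.
def predict_water(d_max, slump):
    return 230.0 - 2.0 * d_max + 0.5 * slump

def predict_cement(water, ccs):
    return water * (0.8 + 0.02 * ccs)

def predict_coarse_agg(d_max, slump):
    return 900.0 + 8.0 * d_max - 1.5 * slump

def predict_fine_agg(water, cement, coarse_agg):
    # Remaining mass for a nominal 2400 kg/m^3 mix.
    return 2400.0 - water - cement - coarse_agg

def mix_design(d_max, slump, ccs):
    """Chain the four layers exactly as the abstract describes:
    layer 1 -> Water, layer 2 -> Cement, layer 3 -> C.A., layer 4 -> F.A."""
    water = predict_water(d_max, slump)
    cement = predict_cement(water, ccs)
    coarse = predict_coarse_agg(d_max, slump)
    fine = predict_fine_agg(water, cement, coarse)
    return {"water": water, "cement": cement, "C.A.": coarse, "F.A.": fine}
```

The point of the cascade is that later layers consume earlier layers' outputs (Water feeds Cement; Water, Cement, and C.A. feed F.A.), so each sub-model can be trained separately.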
Abstract:
Concurrency control (CC) algorithms guarantee the correctness and consistency criteria for the concurrent execution of a set of transactions in a database. A precondition seen in many CC algorithms is that the writeset (WS) and readset (RS) of transactions must be known before transaction execution. In real operational environments, however, the WS and RS are known before execution for only a fraction of the transaction set. Making this knowledge optional is one of the advantages of the CC algorithm proposed in this paper: if the WS and RS are known before transaction execution, the proposed algorithm uses them to improve concurrency and performance. Furthermore, concurrency control algorithms often use a specific static or dynamic equation when deciding whether to grant a lock or which transaction wins a conflict. The algorithm proposed in this paper uses an adaptive resonance theory (ART)-based neural network for this decision making: a parameter called the health factor (HF) is defined for transactions and is used to compare them and detect the winner in accessing database objects. The HF is calculated using an ART2 neural network. Experimental results show that the proposed neural-based CC (NCC) algorithm increases the level of concurrency by decreasing the number of aborts. The performance of the proposed algorithm is compared with the strict two-phase locking (S2PL) algorithm, which has been used in most commercial database systems. Simulation results show that, in terms of the number of aborts, the proposed NCC algorithm outperforms S2PL at different transaction rates.
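The winner-selection idea can be sketched as follows; here a simple weighted score stands in for the ART2-computed health factor, and the feature names and weights are hypothetical:

```python
def health_factor(txn, weights=(0.4, 0.3, 0.3)):
    """Hypothetical stand-in for the ART2-computed health factor:
    a weighted score over a transaction's progress, priority, and
    restart count. Features and weights are illustrative only."""
    w_prog, w_prio, w_restart = weights
    return (w_prog * txn["progress"]
            + w_prio * txn["priority"]
            + w_restart * txn["restarts"])

def winner(contenders):
    """Grant the contested object to the transaction with the highest HF."""
    return max(contenders, key=health_factor)

# Two transactions contending for the same lock.
t1 = {"id": 1, "progress": 0.9, "priority": 0.2, "restarts": 0.0}
t2 = {"id": 2, "progress": 0.1, "priority": 0.8, "restarts": 0.5}
```

The appeal of the neural approach in the paper is precisely that these weights are not fixed by hand but adapted by the ART2 network.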
Abstract:
The task scheduling problem has become an active research topic due to the tremendous growth in the use of cloud computing. Cloud computing is a heterogeneous system that holds and processes large amounts of complex data, and scheduling tasks efficiently leads to better performance and higher throughput. In this paper, in order to minimize processing-time cost and optimize performance, an improved scheduling algorithm is proposed for scheduling tasks in cloud computing environments. The Black Hole algorithm is a metaheuristic inspired by the black hole phenomenon; it outperforms classical methods by using swarm behavior to search for solutions. We consider two costs for each task: memory and computation. A comparison is made between the proposed algorithm and the particle swarm optimization algorithm for task scheduling. Experimental results show that the proposed task scheduling algorithm is more accurate and performs better.
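A minimal Black Hole optimizer over a generic cost function (following the commonly cited star/event-horizon scheme, not the paper's full memory+computation task model) might look like:

```python
import random

def black_hole_optimize(cost, dim, bounds, n_stars=20, iters=200, seed=1):
    """Minimal Black Hole metaheuristic sketch: the best star acts as
    the black hole, other stars drift toward it, and any star that
    crosses the event horizon is re-initialized at random."""
    rng = random.Random(seed)
    lo, hi = bounds
    stars = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_stars)]
    for _ in range(iters):
        costs = [cost(s) for s in stars]
        bh = min(range(n_stars), key=costs.__getitem__)
        # Event horizon radius: black-hole fitness over total fitness.
        radius = costs[bh] / (sum(costs) + 1e-12)
        for i in range(n_stars):
            if i == bh:
                continue
            for d in range(dim):
                stars[i][d] += rng.random() * (stars[bh][d] - stars[i][d])
            dist = sum((a - b) ** 2 for a, b in zip(stars[i], stars[bh])) ** 0.5
            if dist < radius:  # swallowed: replace with a fresh random star
                stars[i] = [rng.uniform(lo, hi) for _ in range(dim)]
    return min(stars, key=cost)

best = black_hole_optimize(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

For task scheduling, each "star" would instead encode a task-to-VM assignment and `cost` would combine the memory and computation costs the abstract mentions.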
Abstract:
In this paper, a distributed algorithm (DATS) is proposed to decrease transmission delay in wireless sensor networks. Because such networks have limited energy, the energy issue is very important. Existing protocols incur transmission delay when sending data among nodes, which increases the network's energy consumption; DATS solves this problem by being distributed. In this algorithm, a distinguished node periodically sends time values along a spanning-tree structure. One of the most important advantages of DATS is that it is purely distributed and local, and does not depend on a fixed structure for disseminating time values. In conclusion, the results show that the suggested protocol decreases the transmission delay of the wireless network by more than 30 percent compared to existing approaches.
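The periodic dissemination of time values along the spanning tree can be sketched as a breadth-first push from the distinguished node; the fixed per-hop delay is an assumed parameter:

```python
from collections import deque

def disseminate(tree, root, t0, hop_delay=1):
    """Propagate a time value from the distinguished root node along a
    spanning tree. Each hop adds a (hypothetical) fixed per-link delay,
    so every node can estimate the current time when the value arrives."""
    clock = {root: t0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in tree.get(node, []):
            if child not in clock:
                clock[child] = clock[node] + hop_delay
                queue.append(child)
    return clock

# Toy spanning tree: root -> {a, b}, a -> {c}.
tree = {"root": ["a", "b"], "a": ["c"], "b": []}
clocks = disseminate(tree, "root", t0=100)
```

In DATS this push is repeated periodically and no fixed tree is required; the sketch fixes one tree purely to show the hop-by-hop time propagation.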
Abstract:
The purpose of this study is to provide a model for clustering insurance customers so that uncertainties and risks regarding customer classes, as well as the variables influencing their behavior, can be identified. To this end, car insurance data from the Alborz Insurance Company were collected for 2009-2013. First, the data were studied and preprocessed; then the RFM technique and two extended approaches, ARFM and SRFMA, were applied, together with a study of customer risk based on claim compensation. Using k-means and fuzzy k-means clustering methods, customer loyalty was reviewed. Next, the RFM technique and the proposed approaches, as well as customer risk, were studied in combination with each other. Numerical results affirm the superiority of the fuzzy-based approaches, which outperform the traditional ones in terms of the accuracy measures. Adding the risk factor to the proposed RFM scheme enhanced the measure, increasing it by 8.41%.
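The crisp half of the clustering comparison can be sketched with a plain k-means over (recency, frequency, monetary) vectors; feature scaling, the fuzzy variant, and the ARFM/SRFMA extensions are omitted:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on (recency, frequency, monetary) tuples — the
    crisp counterpart of the paper's k-means / fuzzy k-means pair."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centers[j])))
            clusters[nearest].append(p)
        centers = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

# Toy RFM records: (recency_days, frequency, monetary).
rfm = [(5, 10, 900), (7, 9, 850), (200, 1, 50), (180, 2, 80)]
centers, clusters = kmeans(rfm, k=2)
```

On this toy data the two loyal, high-spending customers separate cleanly from the two lapsed ones; fuzzy k-means would instead assign each customer a graded membership in both clusters.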
Abstract:
In the information systems (IS) domain, technology adoption has been one of the most extensively researched areas. Although various models have been introduced over the last decade to address the acceptance or rejection of information systems, there is still a lack of studies providing a comprehensive review and classification of research in this area. The main objective of this study is to gain a comprehensive understanding of the progress made in IT adoption research by highlighting the achievements, setbacks, and prospects recorded in this field, so as to identify existing research gaps and prospective areas for future research. This paper aims to provide a comprehensive review of the current state of IT adoption research. A total of 330 articles published in ranked IS journals between 2006 and 2015 in the domain of IT adoption were reviewed. The research scope was narrowed to six perspectives, namely year of publication, theories underlying the technology adoption, level of research, dependent variables, context of the technology adoption, and independent variables. In this research, information on trends in IT adoption is provided by examining related research works, to offer insights and future directions on technology adoption for practitioners and researchers. This paper highlights future research paths for researchers who wish to pursue technology adoption research, and summarizes the key findings of previous works, including statistical findings on the factors that have been introduced in IT adoption studies.
Abstract:
This paper presents a new swarm intelligence-based optimization algorithm that models the movement and other behaviors of swallow swarms. There are three kinds of particles in this method: explorer particles, aimless particles, and leader particles. Each particle has its own characteristics, but all of them fly within a central colony. Each particle exhibits intelligent behavior and perpetually explores its surroundings with an adaptive radius; the positions of neighboring particles, the local leader, and the public leader are considered, and then a move is made. The swallow swarm optimization (SSO) algorithm has demonstrated high efficiency, including fast movement in flat areas (areas where there is no hope of finding food and the derivative is zero), avoidance of local extremum points, high convergence speed, and intelligent participation of particles in the different groups. The SSO algorithm has been tested on 19 benchmark functions and achieved good results on multimodal, rotated, and shifted functions. Its results have been compared with standard PSO, the FSO algorithm, and ten different variants of PSO.
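A heavily simplified sketch of the three particle roles (not the full SSO velocity equations, which also involve adaptive radii and multiple local leaders):

```python
import random

def sso_sketch(cost, dim, bounds, n=30, iters=150, seed=2):
    """Very simplified illustration of SSO's particle roles: the best
    swallow acts as head leader, explorer particles move toward it,
    and aimless particles wander randomly to keep exploring."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    n_aimless = n // 5  # last fifth of the population wanders randomly
    best = list(min(pop, key=cost))
    for _ in range(iters):
        pop.sort(key=cost)
        leader = pop[0]  # head leader: current best swallow
        if cost(leader) < cost(best):
            best = list(leader)
        for i in range(1, n - n_aimless):  # explorer particles
            for d in range(dim):
                pop[i][d] += rng.random() * (leader[d] - pop[i][d])
        for i in range(n - n_aimless, n):  # aimless particles
            pop[i] = [rng.uniform(lo, hi) for _ in range(dim)]
    return best

best = sso_sketch(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
```

The aimless particles play the anti-stagnation role the abstract credits to SSO: even when all explorers have collapsed onto the leader, fresh random samples keep the search from getting stuck in a local extremum.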